Improving Production System Performance on Parallel Architectures by Creating Constrained Copies of Rules
Production systems have pessimistically been hypothesized to contain only minimal amounts of parallelism [Gupta 1984]. However, techniques are being investigated to extract more parallelism from existing systems. Among these methods, it is desirable to find those which balance the work performed in parallel evenly among the rules while at the same time decreasing the amount of work that must be performed sequentially in each cycle. The technique of creating constrained copies of culprit rules accomplishes both of these goals. Production systems are plagued by occasional rules that slow down the entire execution: these rules require much more processing than the others and thus cause other processors to idle while the culprit rules continue to match. By creating constrained copies and distributing them to their own processors, each processor performs less work while the others are busy, yielding increased parallelism, improved load balancing, and less work overall per cycle.
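The splitting technique the abstract describes can be illustrated with a minimal sketch (not taken from the paper; the attribute, value range, and working-memory shape are assumptions): a culprit rule is replaced by copies constrained to disjoint partitions of an attribute's domain, so each copy matches only a fraction of the elements and the copies can be assigned to separate processors.

```python
# Illustrative sketch: split a "culprit" rule condition into range-constrained
# copies. The attribute name "value" and the partitioning scheme are
# hypothetical, chosen only to show the idea.

def make_constrained_copies(condition, lo, hi, n_copies):
    """Split a condition over [lo, hi) into n_copies range-constrained copies."""
    step = (hi - lo) / n_copies
    copies = []
    for i in range(n_copies):
        low, high = lo + i * step, lo + (i + 1) * step
        copies.append(lambda wme, c=condition, a=low, b=high:
                      c(wme) and a <= wme["value"] < b)
    return copies

# Original culprit condition: matches every element with value in [0, 100).
culprit = lambda wme: 0 <= wme["value"] < 100

working_memory = [{"value": v} for v in range(0, 100, 5)]
copies = make_constrained_copies(culprit, 0, 100, 4)

# Each copy matches only its quarter of the domain; together the copies
# cover exactly what the original rule matched, but the work is balanced.
per_copy = [sum(1 for w in working_memory if c(w)) for c in copies]
```

Together the four copies match the same 20 elements as the original rule, but each handles only 5, which is the load-balancing effect the abstract claims.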
Anagram: A Content Anomaly Detector Resistant to Mimicry Attack
In this paper, we present Anagram, a content anomaly detector that models a mixture of high-order n-grams (n > 1) designed to detect anomalous and suspicious network packet payloads. By using higher-order n-grams, Anagram can detect significant anomalous byte sequences and generate robust signatures of validated malicious packet content. The Anagram content models are implemented using highly efficient Bloom filters, reducing space requirements and enabling privacy-preserving cross-site correlation. The sensor models the distinct content flow of a network or host using a semi-supervised training regimen. Previously known exploits, extracted from the signatures of an IDS, are likewise modeled in a Bloom filter and are used during training as well as at detection time. We demonstrate that Anagram can identify anomalous traffic with high accuracy and low false positive rates. Anagram's high-order n-gram analysis technique is also resilient against simple mimicry attacks that blend exploits with normal-appearing byte padding, such as the recently demonstrated blended polymorphic attack. We discuss randomized n-gram models, which further raise the bar and make it more difficult for attackers to build precise packet structures to evade Anagram even if they know the distribution of the local site's content flow. Finally, Anagram's speed and high detection rate make it valuable not only as a standalone sensor, but also as a network anomaly flow classifier in an instrumented fault-tolerant host-based environment; this enables significant cost amortization and the possibility of a symbiotic feedback loop that can improve accuracy and reduce false positive rates over time.
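The core scoring idea of high-order n-gram content modeling can be sketched as follows. This is a simplified illustration, not Anagram's implementation: a plain Python set stands in for the space-efficient Bloom filter, and the payloads, n = 5, and any threshold are assumptions.

```python
# Minimal sketch of n-gram content anomaly scoring in the spirit of Anagram.
# A set replaces the Bloom filter for clarity; training data is illustrative.

def ngrams(payload: bytes, n: int = 5):
    return (payload[i:i + n] for i in range(len(payload) - n + 1))

def train(normal_payloads, n=5):
    """Record every n-gram seen in normal traffic."""
    model = set()
    for p in normal_payloads:
        model.update(ngrams(p, n))
    return model

def score(payload: bytes, model, n=5):
    """Fraction of the packet's n-grams never seen in training (0.0 = fully normal)."""
    grams = list(ngrams(payload, n))
    if not grams:
        return 0.0
    unseen = sum(1 for g in grams if g not in model)
    return unseen / len(grams)

model = train([b"GET /index.html HTTP/1.1", b"GET /about.html HTTP/1.1"])
normal = score(b"GET /index.html HTTP/1.1", model)            # all n-grams known
suspect = score(b"GET /\x90\x90\x90\x90\x90sh HTTP/1.1", model)  # many unseen n-grams
```

Because a mimicry attacker must make nearly every overlapping byte window look normal, higher-order n-grams are harder to evade than single-byte frequency models, which is the resilience property the abstract describes.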
Some Initial Thoughts about Quantum Malware
Does Quantum Malware exist? If so, what malicious activities might it perform?
Those seemingly simple questions are crucial to study and answer now, before the rush to build large-scale practical quantum computers comes to fruition.
Toward Network-based DDoS Detection in Software-defined Networks
To combat the susceptibility of modern computing systems to cyberattack, identifying and disrupting malicious traffic without human intervention is essential. To accomplish this, three main tasks for an effective intrusion detection system have been identified: monitoring network traffic, categorizing and identifying anomalous behavior in near real time, and taking appropriate action against the identified threat. The system presented here leverages a distributed SDN architecture and the principles of Artificial Immune Systems and Self-Organizing Maps to build a network-based intrusion detection system capable of detecting and terminating DDoS attacks in progress.
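The detection step of a Self-Organizing-Map-style approach can be sketched in miniature. This is only an illustration of the principle, not the paper's system: the feature choices, prototype values, and threshold below are all hypothetical.

```python
# Illustrative sketch of prototype-based anomaly detection, the core of
# SOM-style detectors: a flow is flagged when it lies far from every
# prototype learned from normal traffic. All numbers are hypothetical.

import math

# Prototypes learned from normal traffic: (packets/sec, unique src IPs/sec)
prototypes = [(10.0, 2.0), (50.0, 5.0), (120.0, 8.0)]

def nearest_distance(flow, prototypes):
    return min(math.dist(flow, p) for p in prototypes)

def is_ddos(flow, prototypes, threshold=100.0):
    """Flag a flow whose nearest-prototype distance exceeds the threshold."""
    return nearest_distance(flow, prototypes) > threshold

normal_flow = (55.0, 4.0)      # close to the (50, 5) prototype
attack_flow = (900.0, 300.0)   # flood: very high rate, many spoofed sources
```

In a full SOM the prototypes form a trained grid and the threshold is learned from data; the comparison-to-nearest-prototype step shown here is what lets the system classify flows in near real time.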
Behavior-Profile Clustering for False Alert Reduction in Anomaly Detection Sensors
Anomaly detection (AD) sensors compute behavior profiles to recognize malicious or anomalous activities. The behavior of a host is checked continuously by the AD sensor, and an alert is raised when the behavior deviates from its behavior profile. Unfortunately, the majority of AD sensors suffer from high volumes of false alerts, either maliciously crafted by the host or originating from insufficient training of the sensor. We present a cluster-based AD sensor that relies on clusters of behavior profiles to identify anomalous behavior. The behavior of a host raises an alert only when a group of host profiles with similar behavior (a cluster of behavior profiles) detects the anomaly, rather than relying solely on the host's own behavior profile (a single-profile AD sensor). A cluster-based AD sensor significantly decreases the volume of false alerts by providing a more robust model of normal behavior based on clusters of behavior profiles. Additionally, we introduce an architecture designed for the deployment of cluster-based AD sensors. The behavior profile of each network host is computed by its closest switch, which is also responsible for performing the anomaly detection for each of the hosts in its subnet. By placing the AD sensors at the switch, we eliminate the possibility of hosts crafting malicious alerts. Our experimental results, based on wireless behavior profiles of users in the CRAWDAD dataset, show that the volume of false alerts generated by cluster-based AD sensors is reduced by at least 50% compared to single-profile AD sensors.
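The false-alert reduction mechanism can be shown with a toy sketch (the metric, the 3-sigma rule, and the sample data are assumptions, not from the paper): an observation that deviates from one host's sparse profile may still fall within the pooled model of its behavior cluster, so no alert is raised.

```python
# Illustrative sketch: alert against the pooled cluster model rather than a
# single, possibly under-trained, host profile. Data and thresholds are
# hypothetical.

import statistics

def profile(samples):
    return statistics.mean(samples), statistics.pstdev(samples)

def deviates(value, prof, k=3.0):
    """Simple k-sigma deviation test against a (mean, std) profile."""
    mean, std = prof
    return abs(value - mean) > k * max(std, 1e-9)

# Daily bytes transferred (in KB) by three hosts with similar behavior,
# grouped into one cluster of behavior profiles.
host_samples = {
    "h1": [100, 110, 90],            # sparse training for h1
    "h2": [95, 105, 100, 250, 98],   # the cluster has seen occasional bursts
    "h3": [102, 97, 260, 101, 99],
}
cluster_model = profile([v for s in host_samples.values() for v in s])

value = 255  # new observation on h1
single_profile_alert = deviates(value, profile(host_samples["h1"]))  # fires
cluster_alert = deviates(value, cluster_model)                       # does not
```

Here the burst of 255 KB looks anomalous against h1's three training samples but normal against the cluster, which has seen similar bursts on peer hosts: a false alert suppressed, which is the effect the abstract quantifies.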
Citizens' Attitudes about Privacy While Accessing Government Websites: Results of an Online Study
This paper reports the results of an investigation into citizens' attitudes and concerns regarding privacy and security on the Web in general, and on the government websites they may visit in particular. We examine to what extent those concerns can be alleviated by using a Secure Private Portal that protects citizens' personally identifying information when they access government websites. The research project had two main goals: (a) to develop a comprehensive psychological instrument to assess citizens' attitudes and concerns regarding privacy and security on the Web; and (b) to test the impact a Secure Private Portal may have on those concerns and on the way citizens use government websites. To accomplish these goals, researchers from Columbia Business School and from the Columbia departments of Computer Science and Psychology developed and ran a web-based survey. Participants were recruited using online advertising through Google.com and provided their responses on the web. Early analyses of the results indicate a very high level of citizen concern regarding the privacy and security of personal data. Some of the concerns can appropriately be addressed only by fundamental policy changes. Furthermore, the results suggest that citizens perceive sites that use secure portals as much safer and are more likely to visit them again. The results may point to a new strategy for the presentation and design of government websites.
User-Defined Predicates in OPS5: A Needed Language Extension for Financial Expert Systems
OPS5 is widely used for expert system development in industry as well as in academic research. Its limited expressive power, however, can lead to cumbersome and inefficient code. Often a single domain rule must be encoded as a series of OPS5 rules, requiring extensive performance overhead and resulting in an awkward representation of the knowledge. In the financial expert system ALEXSYS, which performs mortgage pool allocation, the lack of user-defined predicates proved to be a major obstacle, prohibiting real-time performance. This work describes the addition of user-defined predicates to OPS5, supported by a patch to Carnegie Mellon University's Common Lisp OPS5 implementation. The necessity of this extension is demonstrated in the context of the ALEXSYS mortgage pool allocation expert system, both in terms of increased efficiency and improved knowledge representation.
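The benefit of user-defined predicates can be shown abstractly (in Python rather than OPS5 syntax; the predicate, attribute names, and tolerance are hypothetical, invented only for illustration): a single rule condition can invoke a domain-specific test that OPS5's fixed built-in predicates (=, <>, <, ...) would otherwise force into a chain of intermediate rules.

```python
# Illustrative sketch: a user-defined predicate embedded directly in one
# rule condition. The tolerance test and data are hypothetical, loosely
# styled after a mortgage pool allocation check.

def within_tolerance(pool_amount, order_amount, tol=0.01):
    """Hypothetical domain predicate: amounts match within 1% of the order."""
    return abs(pool_amount - order_amount) <= tol * order_amount

def allocate_rule(pool, order):
    """One rule: pool is available AND its amount passes the domain predicate."""
    return (pool["status"] == "available"
            and within_tolerance(pool["amount"], order["amount"]))

pool = {"status": "available", "amount": 1_000_500.0}
order = {"amount": 1_000_000.0}
fires = allocate_rule(pool, order)
```

Without user-defined predicates, the tolerance test would have to be decomposed into extra rules that compute and compare intermediate values in working memory, which is the performance and representation overhead the abstract describes.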
Intrusion and Anomaly Detection Model Exchange for Mobile Ad-Hoc Networks
Mobile Ad-hoc NETworks (MANETs) pose unique security requirements and challenges due to their reliance on open, peer-to-peer models that often do not require authentication between nodes. Additionally, the limited processing power and battery life of the devices used in a MANET also prevent the adoption of heavy-duty cryptographic techniques. While traditional misuse-based Intrusion Detection Systems (IDSes) may work in a MANET, watching for packet dropouts or unknown outsiders is difficult, as both occur frequently in both malicious and non-malicious traffic. Anomaly detection approaches hold more promise, as they utilize learning techniques to adapt to the wireless environment and flag malicious data. The anomaly detection model can also create device behavior profiles, which peers can utilize to help determine a device's trustworthiness. However, computing the anomaly model itself is a time-consuming and processor-heavy task. To avoid this, we propose the use of model exchange as a device moves between different networks, as a means of minimizing computation and traffic utilization. Any node should be able to obtain its peers' models and evaluate them against its own model of "normal" behavior. We present this model, discuss scenarios in which it may be used, and provide preliminary results and a framework for future implementation.
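The evaluate-a-peer's-model step can be sketched as follows. This is a hedged illustration only: the paper does not specify a similarity measure, so the set-of-signatures representation, Jaccard similarity, and trust threshold below are all assumptions.

```python
# Illustrative sketch: a node receives a peer's precomputed behavior model
# and compares it against its own model of "normal" to derive a trust
# score, instead of retraining from scratch. Representation and threshold
# are hypothetical.

def jaccard(a: set, b: set) -> float:
    """Similarity of two models represented as sets of behavior signatures."""
    return len(a & b) / len(a | b) if a | b else 1.0

own_model = {"beacon:1s", "tcp:80", "tcp:443", "udp:53"}
peer_model = {"beacon:1s", "tcp:80", "udp:53", "tcp:22"}  # received on joining

trust = jaccard(own_model, peer_model)
trusted = trust >= 0.5   # hypothetical trust threshold
```

Reusing the exchanged model in this way is what saves the battery- and CPU-constrained device from recomputing an anomaly model in every network it joins.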
Explanation and Acquisition in Expert Systems Using Support Knowledge
There are many criteria that an expert system must meet in order to be considered successful in a domain. An important criterion is that it be able to solve problems in its domain with a satisfactory level of expertise. In addition, an expert system should also be able to communicate well with its users. This means not only asking for relevant information when needed but also providing explanations of its reasoning process that are acceptable to a user. Furthermore, an expert system should be easily expandable to incorporate new knowledge or correct outdated or erroneous knowledge.
Incremental Evaluation of Rules and its Relationship to Parallelism
Rule interpreters usually start with an initial database and perform the inference procedure in cycles, ending with a final database. In a real-time environment it is possible to receive updates to the initial database after the inference procedure has started or even after it has ended. We present an algorithm for incremental maintenance of the deductive database in the presence of such updates. Interestingly, the same algorithm is useful for parallel and distributed rule processing in the following sense. When the processors evaluating a program operate asynchronously, then they may have different views of the database. The incremental maintenance procedure we present can be used to synchronize these views.
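The flavor of incremental maintenance under late-arriving updates can be sketched with a standard semi-naive evaluation over a tiny deductive database (transitive closure). This is a generic illustration of the idea, not the paper's algorithm: only the consequences of the newly inserted facts (the "delta") are recomputed, rather than re-running inference from scratch.

```python
# Illustrative sketch: incrementally maintain a derived relation (here,
# reachability over edges) as new base facts arrive after inference has
# started or finished. Semi-naive style: only join the delta, not the
# whole database, on each round.

def close_incremental(reach, delta):
    """Fold the new edges in `delta` into the reachability relation `reach`."""
    frontier = set(delta) - reach
    while frontier:
        reach |= frontier
        new = set()
        for (a, b) in frontier:
            # Join each new pair with the full relation on both sides.
            for (c, d) in reach:
                if b == c:
                    new.add((a, d))
                if d == a:
                    new.add((c, b))
        frontier = new - reach
    return reach

reach = set()
close_incremental(reach, {("x", "y"), ("y", "z")})   # initial database
# An update arrives after the inference procedure has ended:
close_incremental(reach, {("z", "w")})
```

The same delta-driven step also serves the synchronization use the abstract mentions: a processor holding a stale view can fold in the facts it missed without recomputing the parts of the database both views already agree on.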